Patent abstract:
BELT WEAR MEASUREMENT THROUGH EDGE DETECTION OF A RASTER IMAGE. Tools are provided to determine belt wear. Specifically, non-contact-based systems and processes are described, which enable fast and accurate measurement of belt wear. Based on belt wear measurements, a belt wear condition can be determined. Information regarding the wear condition can then be used to determine an appropriate remediation measure to respond to the determined wear condition.
Publication number: BR112014005078B1
Application number: R112014005078-3
Filing date: 2012-09-06
Publication date: 2021-07-06
Inventors: Flloyd M. Sobczak; James H. Sealey; Douglas R. Sedlacek; Mark E. Stuemky; Justin L. Aschenbrenner
Applicant: The Gates Corporation
Primary IPC:
Patent description:

DESCRIPTION

FIELD
[0001] The present disclosure is generally directed towards measuring belt wear and, more specifically, using images to identify belt wear characteristics and predict belt life.

BACKGROUND
[0002] Serpentine drive belts are becoming increasingly durable due to the use of ethylene propylene diene monomer (EPDM) materials. As a result, a historically reliable belt wear indicator, cracking, occurs less frequently, although belts continue to wear over time. A problem that arises from the use of these advanced materials is that pre-failure wear is increasingly difficult to quantify. In other words, serpentine drive belts made of EPDM materials are commonly only diagnosed as excessively worn after complete belt failure.
[0003] Recent advances to address the problem identified above require a physical tool that is contacted with a belt being measured. Examples of such tools are described in US Patent number 7,946,047 and US Patent Publication number 2010/0307221 both to Smith et al., each of which is hereby incorporated by reference in its entirety. These solutions rely on a physical contact between the measuring tool and the belt being measured.
[0004] It would be useful to develop a belt measurement solution that does not rely on physical contact between a tool and the belt being measured and that can quickly and effectively identify belt wear.

SUMMARY
[0005] It is with respect to the above issues and other problems that the embodiments presented herein were contemplated. In particular, at least some of the embodiments presented here provide a belt measuring solution that does not require physical contact between a measuring tool and the belt being measured. More specifically, embodiments of the present disclosure contemplate an image-based measurement solution in which one or more images of a belt can be captured. The captured belt images can then be analyzed and compared against a belt image database. This analysis and comparison, in some embodiments, results in a wear measurement calculation that can then be used to quantify the amount of wear the belt has experienced.
[0006] Additionally, belt image data can be used for fine tuning and optimized operation of belt equipment to improve equipment performance.
[0007] In some embodiments, belt wear measurement is achieved by detecting rib width through edge detection of a raster image.
[0008] In some embodiments, edge detection and/or rib width detection includes a process of capturing one or more images of the belt being measured, performing a Gaussian blur operation on the one or more images, then performing a grayscale conversion of the one or more images, then performing a binary conversion of the one or more images, and then performing a Canny edge detection operation on the one or more images.
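As a non-limiting illustration of this pipeline, the following sketch uses the OpenCV library; the blur kernel size, binary threshold, and Canny parameters are assumptions chosen for illustration and are not values specified by the disclosure.

```python
# Minimal sketch of the capture-to-edges pipeline described above, using
# OpenCV. Kernel size, binary threshold, and Canny thresholds are
# illustrative assumptions, not values taken from the disclosure.
import cv2

def detect_belt_edges(image_path: str, binary_threshold: int = 128):
    image = cv2.imread(image_path)  # captured belt image (BGR)
    if image is None:
        raise FileNotFoundError(image_path)
    blurred = cv2.GaussianBlur(image, (5, 5), 0)      # Gaussian blur to suppress noise
    gray = cv2.cvtColor(blurred, cv2.COLOR_BGR2GRAY)  # grayscale conversion
    _, binary = cv2.threshold(gray, binary_threshold, 255,
                              cv2.THRESH_BINARY)      # binary conversion
    edges = cv2.Canny(binary, 50, 150)                # Canny edge detection
    return edges
```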
[0009] In some embodiments, for single-rib V-belts, automatic threshold adjustments for the binary conversion can be determined by sampling at discrete gray threshold levels or, alternatively, at red, green, and blue (RGB) values, and taking the median value of all thresholds that detect one edge region. In other embodiments, for multi-rib belts (where the ribs run either parallel or perpendicular to the belt's direction of travel), sampled at discrete gray intensity levels or, alternatively, RGB values, the number of ribs in the image is determined as the mode of the detected edge region counts, and the threshold is taken as the median value of the ordered sequence of thresholds that produce that mode. In other embodiments, the belt width and the pixel-to-physical ratio (e.g., pixels/mm or pixels/in) can be determined by multiplying the known rib spacing of a particular belt by the detected rib region count and relating the result to the detected outer edges of the belt. In other embodiments, the rib region may comprise an outer envelope of four edges of the detected rib area as determined by the Canny edge detection algorithm or another edge detection algorithm.
[00010] In some embodiments, a method is provided to identify a belt as excessively worn or failed by determining that (i) the detected number of ribs, (ii) the rib width, and/or (iii) the pixel/mm or pixel/in ratio in an image are inconsistent with the ideal thresholds of such metrics. In particular, these metrics for a belt can be processed through a reject/validate algorithm that compares these metrics with known or preferred metrics for the same type of belt, and if the comparison reveals that the belt being measured is not within a predetermined range of the ideal thresholds, then the belt being measured can be rejected.
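A minimal sketch of such a reject/validate comparison follows; the metric names and the ±5% tolerance are illustrative assumptions, since the disclosure specifies only that measured metrics be compared against predetermined ranges of ideal thresholds.

```python
# Hedged sketch of the reject/validate comparison described above. The
# metric names and tolerance are assumptions for illustration only.
def validate_belt(measured: dict, ideal: dict, tolerance: float = 0.05) -> bool:
    """Return True if every measured metric is within tolerance of its ideal value."""
    for metric in ("rib_count", "rib_width_mm", "pixels_per_mm"):
        lo = ideal[metric] * (1.0 - tolerance)
        hi = ideal[metric] * (1.0 + tolerance)
        if not (lo <= measured[metric] <= hi):
            return False  # reject the belt
    return True

# Example: a 6-rib belt whose measured rib width is far from ideal is rejected.
print(validate_belt({"rib_count": 6, "rib_width_mm": 2.9, "pixels_per_mm": 11.8},
                    {"rib_count": 6, "rib_width_mm": 3.56, "pixels_per_mm": 12.0}))
```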
[00011] The term "automatic" and variations thereof, as used herein, refer to any process or operation done without material human input when the process or operation is performed. However, a process or operation can be automatic, even though performance of the process or operation uses material or immaterial human input, if the input is received before performance of the process or operation. Human input is deemed to be material if such input influences how the process or operation will be performed. Human input that consents to the performance of the process or operation is not deemed to be material.
[00012] The term "computer readable medium" as used herein refers to any tangible storage that participates in providing instructions to a processor for execution. Such a medium can take various forms, including, but not limited to, non-volatile media, volatile media, and transmission media. Non-volatile media include, for example, NVRAM, or magnetic or optical disks. Volatile media include dynamic memory, such as main memory. Common forms of computer readable media include, for example, a floppy disk, a flexible disk, a hard disk, magnetic tape, or any other magnetic medium, magneto-optical media, a CD-ROM, any other optical medium, punch cards, paper tape, and any other physical medium with patterns of holes, a RAM, a PROM, an EPROM, a FLASH-EPROM, a solid state medium such as a memory card or any other memory chip or cartridge, or any other medium from which a computer can read. When the computer readable medium is configured as a database, it should be understood that the database can be any type of database, such as relational, hierarchical, object-oriented, and/or the like. Accordingly, the disclosure is considered to include a tangible storage medium and prior-art-recognized equivalents and successor media, in which the software implementations of the present disclosure are stored.
[00013] The terms "determine", "calculate", and "compute", and variations thereof, as used herein, are used interchangeably and include any type of mathematical methodology, process, operation, or technique.
[00014] The term "module" as used herein refers to any hardware, software, firmware, artificial intelligence, fuzzy logic, or combination of hardware and software that is capable of performing the functionality associated with that element. Also, although the disclosure is described in terms of exemplary embodiments, it should be appreciated that individual aspects of the disclosure can be claimed separately.

BRIEF DESCRIPTION OF THE DRAWINGS
[00015] The embodiments are described in conjunction with the appended figures:
Fig. 1 is a block diagram outlining a measurement system in accordance with embodiments of the present disclosure;
Fig. 2 is a block diagram outlining a communication system in accordance with embodiments of the present disclosure;
Fig. 3 is a block diagram outlining an example of a data structure used in accordance with embodiments of the present disclosure;
Fig. 4 is a flowchart outlining a process for determining belt wear in accordance with embodiments of the present disclosure;
Fig. 5 is a diagram outlining a contrast threshold in accordance with embodiments of the present disclosure;
Fig. 6 outlines image components in accordance with embodiments of the present disclosure;
Fig. 7A outlines a first image in accordance with embodiments of the present disclosure;
Fig. 7B outlines a second image in accordance with embodiments of the present disclosure;
Fig. 7C outlines a third image in accordance with embodiments of the present disclosure;
Fig. 7D outlines a fourth image in accordance with embodiments of the present disclosure;
Fig. 8 outlines a user interface for a belt measurement application in accordance with embodiments of the present disclosure;
Fig. 9 outlines a data sample indicating the number of ribs detected from a belt image as a function of threshold level;
Fig. 10 plots a histogram of the data sample from Fig. 9 showing the number of occurrences of each rib count; and
Fig. 11 outlines a filtered data set in which data from the most frequent rib count are used to determine a usable threshold level.

DETAILED DESCRIPTION
[00016] The description that follows provides exemplary embodiments only and is not intended to limit the scope, applicability, or configuration of the claims. Instead, the description that follows will provide those skilled in the art with an enabling description for implementing the described embodiments. It should be understood that various changes can be made in the function and arrangement of elements without departing from the spirit and scope of the appended claims.
[00017] Although aspects of the present disclosure are discussed in connection with measuring belt wear, and specifically serpentine belts that have one or more ribs, those of ordinary skill in the art will appreciate that embodiments of the present disclosure are not so limited. In particular, some or all of the algorithms, steps, procedures, or system components described herein may be employed in measuring friction-induced and non-friction-induced wear of any device or collection of devices. For example, it is contemplated that the technology described here could be used effectively to measure wear on any moving part, including gears, wheels, bearings, etc.
[00018] For example, embodiments of the present disclosure can be used to capture one or more images of an object that undergoes expansion and/or contraction due to thermal fluctuations. These expansions and/or contractions can result in the object becoming fatigued and eventually failing. The image-based measurement processes disclosed herein can be used to analyze objects that undergo such thermal expansion and/or contraction and, if necessary, identify objects as being good (for example, not needing replacement) or bad (for example, requiring replacement).
[00019] Additionally, the embodiments described here can be used for certain friction-induced wear conditions of any type of belt. As an example, the belt that is discussed can be constructed of any material or combination of materials, including, without limitation, rubber, polymer, polyethylene, carbon fiber, Kevlar, metal, polyvinyl chloride, and/or foam.
[00020] Referring now to Fig. 1, a measurement system 100 will be described in accordance with embodiments of the present disclosure. The measurement system 100 may comprise one or more components for analyzing an object under test 104 and classifying the object under test 104 as good, for example, not requiring replacement, or bad, for example, requiring replacement. Other determinations can be made for the object under test 104 without departing from the scope of the present disclosure; for example, the object under test 104 may be identified as failing (e.g., soon requiring replacement) or abnormal (e.g., not following an expected wear pattern and therefore requiring further investigation and/or replacement).
[00021] In some embodiments the measurement system 100 comprises an image capture device 108, an image conversion module 112, an analysis module 116, an object database 120, a reporting module 124, and a user interface 128. In some embodiments each component of the system 100 may be included in a single computing device that is owned and operated by a user 132. As an example, each component of the measurement system 100 may be included in a user device such as a cell phone, smart phone, personal computer (PC), laptop, netbook, tablet, or the like. In other embodiments, the components of the measurement system 100 may be distributed among two or more different computing devices.
[00022] In some embodiments the image capture device 108 is configured to capture one or more images of the object under test 104. As a non-limiting example, the object under test 104 may comprise a belt, specifically a serpentine belt made of EPDM materials. The belt can be located in an operational position, for example, mounted on a vehicle or other device that employs the belt, or it can be in a non-operational position, for example, removed from a vehicle or other device that employs the belt. The image capture device 108 may be capable of capturing one or more still images. Alternatively or in addition, the image capture device 108 may be capable of capturing video images, for example, a sequenced number of image frames that may or may not be synchronized with an audio input. The images captured by the image capture device 108 may comprise color images (e.g., a pixel image where each pixel comprises a red, green, and blue (RGB) pixel value), grayscale images (e.g., a pixel image where each pixel comprises a grayscale pixel value between zero and a predetermined number, such as 255), black-and-white images (e.g., a pixel image where each pixel comprises a binary value corresponding to either black or white), infrared images (e.g., a pixel image where each pixel comprises an infrared pixel value), ultraviolet images (e.g., a pixel image where each pixel comprises an ultraviolet pixel value), or any other known type of image. A non-limiting example of an image capture device 108 is a camera (still or video) that is either a standalone device or is embedded in a user device such as a smart phone.
[00023] Depending on the nature of the image, it may be necessary to reformat the image (for example, convert it from one image type to another image type) before processing the image and analyzing characteristics of the object under test 104. In some embodiments an image conversion module 112 is provided for performing image conversion of the images captured by the image capture device 108 before the images are analyzed by the analysis module 116. In some embodiments the image conversion module 112 may comprise one or more image filters and/or processing algorithms to detect contour edges of the object under test 104 in the images. As a non-limiting example, the image conversion module 112 is configured to convert grayscale and/or RGB images to black-and-white images by performing one or more edge detection processes.
[00024] The analysis module 116, in some embodiments, is configured to analyze the images of the object under test 104 to determine one or more physical characteristics of the object under test 104. In some embodiments the analysis module 116 may reference an object database 120 to determine the physical characteristics of the object under test 104. Specifically, the object database 120 may comprise data for a number of different types of objects, for example, serpentine belts, and each of the different types of objects can have example images of a good object of that type and of a bad object of that type. The analysis module 116, in some embodiments, can compare the images captured by the image capture device 108 with the images in the object database 120 to determine whether the object under test 104 is more similar to a good object or a bad object. Alternatively or in addition, the object database 120 may comprise one or more metrics and thresholds that define acceptable or unacceptable measurements for each of the different types of objects. In such an embodiment, rather than comparing actual images, the analysis module 116 can extract physical dimension data and other physical metrics from the images and compare the physical dimension data of the object under test 104 with the metrics and thresholds in the object database 120. This comparison can be used to determine whether the object under test 104 is a good object or a bad object.
[00025] Results obtained by the analysis module 116 can be provided to a reporting module 124, which is configured to prepare a report of the analysis results for the user 132. In some embodiments the reporting module 124 can comprise functionality to format a report that identifies whether the object under test 104 has been identified as good or bad, and whether any further steps should be taken in connection with such a determination. The reporting module 124 may also comprise functionality to transmit the report to the user interface 128. As a non-limiting example, the reporting module 124 may comprise functionality to prepare a report (or report statistics) and transmit the report to the user interface 128 through any type of communication protocol.
[00026] User interface 128 may comprise a user input and/or user output device. In some embodiments, the user interface 128 comprises a user output that enables the user 132 to view the results of the report generated by the reporting module 124.
[00027] Referring now to Fig. 2, a communication system 200 in which the measurement system 100 may be incorporated will be described in accordance with embodiments of the present disclosure. Communication system 200 may comprise one or more devices that incorporate the components of measurement system 100. Although certain components of measurement system 100 are depicted as being included in certain components of communication system 200, those skilled in the communication arts will appreciate that the various components of measurement system 100 can be distributed among one or more devices of communication system 200 in any number of ways. For example, all components of measurement system 100 can be included in a single user device 208. Alternatively, one or more components of measurement system 100 can be provided on one or more other user devices, on one or more servers 240, on a collection of servers, or on any other similar device.
[00028] In some embodiments, the communication system 200 comprises a user device 208 in communication with one or more servers 240 via a communication network 204. The user device 208 in some embodiments comprises any known type of communication device or collection of communication devices. Examples of a suitable user device 208 include, but are not limited to, a personal computer (PC), laptop, netbook, personal digital assistant (PDA), cell phone, smart phone, telephone, or combinations thereof. In general, the user device 208 can be adapted to support video, image, audio, text, and/or data communications with other user devices, as well as with the server 240. The type of medium used by the user device 208 to communicate with other user devices or with the server 240 may depend on the nature of the communication module 228 available on the user device 208.
[00029] Server 240 may comprise any type of processing resources dedicated to performing the functions discussed herein. As a non-limiting example, server 240 may comprise a web server having hardware and software that helps deliver content accessible through communication network 204. A function of a web server is to deliver web pages upon request to clients (e.g., user device 208). This means delivering one or more documents to the user device 208 via any type of markup language (e.g., hypertext markup language (HTML), extensible markup language (XML), and the like) and any additional content that may be referenced by a document, such as images, style sheets, and scripts. In some embodiments, server 240 may comprise the ability to receive requests or other data from user device 208, where requests are formatted according to a mutually agreed protocol (e.g., hypertext transfer protocol (HTTP), real-time transport protocol (RTP), secure variants thereof, or the like). The request may comprise a request for a specific resource using HTTP and may further comprise a request to perform one or more operations before providing a response thereto. Upon receiving a request, server 240 may locate the requested resource and/or perform one or more operations before returning a response to the requesting user device 208 using the same or a different type of protocol. Communications between server 240 and user device 208 can be facilitated by communication modules 228, 244.
[00030] The communication module 228 of the user device 208 may comprise a web browsing application (e.g., Internet Explorer®, Mozilla Firefox, Safari®, Google Chrome®, or the like) that enables the user device 208 to format one or more requests and send the requests to server 240. Communication module 244 of server 240 may comprise the functionality required to execute one or more scripts and respond to requests from user device 208. In some embodiments, the communication module 244 may be provided on a dedicated web server, while the other components of server 240 (e.g., analysis module 116 and reporting module 124) are provided on a different server that is connected to the web server. In either configuration, the analysis module 116 and the reporting module 124 can be made available to the user device 208 through the communication module 244, and the reporting module 124 can be configured to send reports or reporting results back to the user device 208 through the communication module 244.
[00031] Communication network 204 may include wired and/or wireless communication technologies. The Internet is an example of the communication network 204 and constitutes an Internet Protocol (IP) network consisting of many computers, computing networks, and other communication devices located throughout the world, which are connected through various telephone systems and other devices. Other examples of the communication network 204 include, without limitation, a plain old telephone system (POTS), an integrated services digital network (ISDN), a public switched telephone network (PSTN), a local area network (LAN), a wide area network (WAN), a session initiation protocol (SIP) network, a cellular network, and any other type of packet-switched or circuit-switched network known in the art. In addition, it can be appreciated that the communication network 204 need not be limited to any one network type and, instead, may consist of a number of different networks and/or network types.
[00032] Communication module 228 of user device 208 may be stored in memory 216 along with other instructions, scripts, etc., which may be referred to generically as applications. In some embodiments the operating system (O/S) 220 and test application 224 may be provided as instructions in memory 216, and these instructions may be executed by processor 212 of user device 208. In some embodiments processor 212 may comprise one or more integrated circuits (ICs), a digital signal processor (DSP), an application-specific IC (ASIC), a controller, a programmable logic device (PLD), a logic circuit, or the like. In particular, processor 212 may comprise a general-purpose programmable processor or controller for executing applications, application programming, or instructions stored in memory 216. In at least some embodiments, processor 212 may comprise multiple processor cores and/or implement multiple virtual processors. In some embodiments, processor 212 may comprise a plurality of physically different processors.
[00033] Memory 216 may comprise any device or collection of devices configured for short-term or long-term storage of information, e.g., instructions, data, etc. As some non-limiting examples, memory 216 may comprise random access memory (RAM), any known variant of RAM, solid state memory, any known variant of solid state memory, read only memory (ROM), any known variant of ROM, or combinations thereof. As can be appreciated, memory 216 can not only be used to store applications for execution by processor 212, but can also be used as processing memory (e.g., cache, RAM, etc.) by processor 212. Alternatively or in addition, memory 216 may comprise a hard disk drive or the like.
[00034] The O/S 220 may correspond to any known type of operating system or system platform. In some embodiments the type of O/S 220 used may depend on the nature of the user device 208. The O/S 220 represents a high-level application that allows a user to navigate and access the various other applications stored in memory 216. The O/S 220 may also comprise one or more application programming interfaces (APIs) or API-like rules that allow applications to access and/or control the various other software or hardware components of the user device 208.
[00035] The test application 224 may enable the user device 208 to perform one or more functions in connection with testing the object under test 104. As a non-limiting example, the test application 224 may comprise functionality similar to the image conversion module 112, analysis module 116, and/or reporting module 124. More specifically, test application 224 may comprise an application on user device 208 that allows a user 132 to operate image capture device 108 and capture one or more images of the object under test 104. Test application 224 may also be configured to format the images captured by image capture device 108 for transmission to server 240 and/or analysis by analysis module 116. In some embodiments, analysis module 116 may also be included in test application 224. Thus, the test application 224 can also be configured to determine physical metrics regarding the object under test 104 based on the images obtained thereof.
[00036] In addition to the components described above, the user device 208 may further comprise the user interface 128, a power source 232, and a network interface 236. In some embodiments the user interface 128 may comprise one or more user inputs (e.g., microphone, camera, buttons, switches, scanners, etc.), one or more user outputs (e.g., speaker, lights, display screens, etc.), and/or one or more combined user input/output devices (e.g., touchscreen display, illuminated button, etc.).
[00037] The power source 232 may comprise a power source dedicated to the user device 208, such as a battery, one or more capacitors, and the like. Power source 232 may additionally or alternatively comprise power adapters for converting A/C power from an external power source to DC power that is utilized by the various components of user device 208.
[00038] The network interface 236 may comprise one or more device ports, controllers, drivers, etc., that enable the user device 208 to connect with, and communicate through, the communication network 204. In some embodiments the network interface 236 may comprise components that facilitate wired and/or wireless connections with the communication network 204. As some non-limiting examples, the network interface 236 may comprise one or more wireless interfaces that facilitate cellular communications with a cellular communication network, an 802.11x-compliant interface, an Ethernet port, or the like.
[00039] In addition to facilitating communications with communication module 244, communication module 228 of user device 208 can facilitate other types of communications, such as voice, video, text, email, and multimedia, with other user devices. As an example, the communication module 228 may comprise functionality that enables the user device 208 to make voice calls, make video calls, send and receive short message service (SMS) messages, send and receive multimedia message service (MMS) messages, and the like.
[00040] Referring now to Fig. 3, details of a data structure 300 that can be maintained in the object database 120 will be described in accordance with embodiments of the present disclosure. It should be appreciated that the data structure 300 can be maintained entirely in the object database 120 and can be retrieved therefrom. In some embodiments a structured language such as Structured Query Language (SQL) can be employed by the analysis module 116 to perform database queries and retrieve information from the data structure 300. Alternatively or in addition, the data structure 300 may be held in memory of server 240 and/or memory 216 of user device 208.
[00041] In some embodiments the data structure 300 may comprise a plurality of data fields that can be used by the analysis module 116 to analyze the object under test 104 (an image thereof) to determine whether the object under test 104 is good, is bad, or has some other characteristic that can affect its operation. Fields that may be included in data structure 300 may comprise, without limitation, a belt identifier field 304, a rib count field 308, a rib profile field 312, an image overlay data field 316, a thresholds field 320, a wear history field 324, and a replacement history field 328.
[00042] The belt identifier field 304 can be used to store information to identify any type of object under test 104. If the object under test 104 is a belt, such as a serpentine belt, then the belt identifier field 304 may comprise one or more of a part number, a model number, a Universal Product Code (UPC), a part name, a model name, and any other information that can be used to identify the belt. As can be appreciated, if the object under test 104 is not a belt, then any other information specific to that object can be stored in the belt identifier field 304.
[00043] Continuing the example of a belt being the object under test 104, the rib count field 308 can be used to store information identifying the number of ribs on a particular type of belt. Alternatively or in addition, the rib count field 308 may store information identifying the number of grooves in the belt or any other aspect that maintains the belt in a normal operating position.
[00044] The rib profile field 312 can be used to store information identifying physical characteristics of the ribs or spaces between the ribs on the belt. For example, rib profile field 312 can store information identifying whether the ribs are V-shaped, square-shaped, rounded, etc. The rib profile field 312 may also store information relating to as-manufactured distances between ribs, and the like. Any information that identifies a good (expected) profile or a bad (unexpected) profile for the belt can be kept in the rib profile field 312.
[00045] Image overlay data field 316 can be used to store any image data that is used for overlay purposes. For example, image overlay data field 316 may comprise a number of images of a belt of a particular type. The images can range from images of good belts to images of bad belts. Image overlay data field 316 may also comprise one or more images of an object under test 104. Image overlay data field 316 may also comprise results of an image overlay process in which an image of the object under test 104 is compared to one or more stored example images of belts with known physical properties. More specifically, image overlay data field 316 can be used to store data identifying the results of an image overlay process (e.g., which among a plurality of stored example images the image of the object under test 104 is most similar to, and the physical properties of the stored example image that is most similar to the image of the object under test 104).
[00046] Thresholds data field 320 may comprise one or more thresholds for determining whether an object under test 104 is good, bad, or in some other state. In particular, the thresholds field 320 may comprise information regarding one or more physical properties of the belt that, when matched, exceeded, etc., by the object under test 104, result in a determination that the object under test 104 is bad, good, etc.
[00047] Wear history data field 324 can be used to store information relating to historical wear observations for the specific object under test 104 or other objects of similar types. For example, if a belt of a particular type is observed to have an excessive amount of wear on one or more specific ribs (e.g., inner ribs, outer ribs, a particular rib, outer edges, etc.), then such information may be maintained in the wear history field 324. The wear history field 324 is useful to manufacturers of objects similar to the object under test 104, since the data kept in the wear history field can be analyzed and, ideally, used to produce better objects in the future. Alternatively or in addition, data in the wear history field can be used to fine-tune the equipment that is using the operating belt, thereby allowing the equipment (e.g., vehicle, engine, machine, etc.) to continue to operate optimally as the belt wears.
[00048] The replacement history field 328 can be used to store information regarding the replacement history of a particular object under test 104 or of objects of similar types. In some embodiments the replacement history field 328 can identify the amount of usage that a particular object under test 104 experienced before receiving a bad grade and having to be replaced. This information can be shared between the manufacturer of the object under test 104 and the consumer of the object under test 104. Additionally, this information can be used to inform maintenance personnel that, if an object of a particular type has experienced a certain amount of use, then it might at the very least be time to subject that object to one or more of the testing procedures described herein.
[00049] Referring now to Fig. 4, details of a test procedure will be described in accordance with embodiments of the present disclosure. The method described here will be directed towards the particular situation where a belt is the object under test 104; however, those skilled in the art will appreciate that one or more of the steps described here can be used to test objects of other types for any type of wear.
[00050] In a belt testing method, the first step may be to mark the belt with one or more indicators that create visible contrast over one or more aspects of the belt (step 404). In some embodiments the belt may be marked during its manufacture at one or more predetermined locations, and this may be all that is required during the belt marking step. In other embodiments, the belt may not be manufactured with any particular mark that will be used to test the belt. Instead, the belt can be marked just before one or more images of the belt are taken. In some embodiments, the belt can be marked with a marker, pen, or the like, to create one or more contrasting features on the belt. The types of aspects of the belt that are marked may depend on the type of wear being tested on the belt. If the belt is designed to wear on its ribs, then one, two, three, four, or all of the belt ribs can be marked at a common point on the belt with a marker or the like. Alternatively, the belt may be marked in its valleys or at other predetermined locations on the belt. In other embodiments, the belt can have a pre-manufactured aspect that only becomes visible when the belt wears. Specifically, the belt may comprise a number of layers of EPDM materials, and one or more of these layers may comprise a different color than the other layers of the belt. Consequently, when the belt wears, the layer of the other color can become exposed, and this can be sufficient to mark the belt in accordance with embodiments of the present disclosure.
[00051] After the belt has been marked, regardless of the way in which it has been marked, the method proceeds with the image capture device 108 taking one or more images of the belt at the locations where it was marked (step 408). As discussed above, the images can be taken using still cameras, video cameras, etc.
[00052] The data from the one or more images (e.g., pixel data) are then transmitted to the analysis module 116 for analysis (step 412). In some embodiments, the image data can comprise any known type of image file format, that is, a standardized mechanism for organizing and storing digital images. Image files can be composed of pixels, vector (geometric) data, or a combination of the two. Whatever the format, files are rasterized to pixels when presented on most graphic displays. The pixels that make up an image are ordered as a grid of columns and rows, each pixel consisting of numbers that represent magnitudes of brightness and color for a specific location within the image. It should be appreciated that any type of image file can be used to transport the image data to the analysis module 116. Non-limiting examples of such image files include JPEG, Exif, TIFF, RAW, PNG, GIF, BMP, PPM, PGM, PBM, PNM, WEBP, CGM, Gerber format (RS-274X), SVG, PNS, JPS, or any other image format type known or yet to be developed.
[00053] The transmission step can also comprise formatting the image files in such a way that the image data can be communicated to the analysis module 116 via the communication network 204. Consequently, the image files can be packaged and formatted for transmission over communication network 204, (e.g., within one or more communication packets transmitted over the Internet).
[00054] Either before or while being transmitted to the analysis module 116, the images can be formatted for edge detection by the image conversion module 112 (step 416). Depending on the location of the image conversion module 112, this step can be performed either before or after transmitting the image data. For example, if the image conversion module 112 and the analysis module 116 reside on the same device, then transmission can take place prior to formatting. However, if the image conversion module 112 and the analysis module 116 reside on different devices, then transmission can take place after formatting.
[00055] In this formatting step, one or more image processing techniques can be employed. As some examples: the image data can be converted to grayscale (if the images were originally taken as color images); one or more filters can be applied to the image data (e.g., a Gaussian blur can be applied to smooth out image noise and filter out other unwanted image data); and the image data can be converted to binary data, such as a black-and-white image (e.g., by applying a binary threshold algorithm to the grayscale image to convert any pixel above the threshold to one color (e.g., black) and any pixel equal to or below the threshold to the other color (e.g., white)).
[00056] After the image has been properly formatted, the method continues with the analysis module 116 locating one or more contours (e.g., identifying objects) within the image data (step 420). In some embodiments, the analysis module 116 can be configured to perform a Canny edge detection process. Other types of edge or blob detection techniques can be employed without departing from the scope of this disclosure. Examples of such techniques include, without limitation, Canny-Deriche, differential, Sobel, Prewitt, Roberts cross, interest point detection, corner detection, Harris operator, Shi and Tomasi, level curve curvature, SUSAN, FAST, Laplacian of Gaussian (LoG), Difference of Gaussians (DoG), Determinant of Hessian (DoH), maximally stable extremal regions (MSER), PCBR, Hough transform, structure tensor, affine invariant feature detection, affine shape adaptation, Harris affine, Hessian affine, and so on.
[00057] In step 424 it can be determined whether any threshold adjustments are required. Specifically, based on the contour detection results of step 420, all contours within the image data can be located. If a number of aspects, for example, ribs, are detected, and that number of detected aspects does not equal the expected number of aspects, then it may be necessary to adjust the one or more thresholds that were used during image formatting (step 428). As an example, the thresholds used to convert an image to a black-and-white image can be adjusted using a bidirectional solver to find thresholds that produce the expected number of aspects (e.g., ribs).
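One way to realize the detection-and-count portion of this loop is sketched below; the contour filter that decides which contours count as ribs is an assumption for illustration, as the disclosure does not fix such a rule.

```python
# Sketch of a helper for steps 420-428: count rib-like contours at a given
# binarization threshold (OpenCV 4 findContours signature). The aspect-ratio
# filter separating rib markings from noise is an illustrative assumption.
import cv2

def count_ribs(gray, threshold: int) -> int:
    _, binary = cv2.threshold(gray, threshold, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    ribs = 0
    for contour in contours:
        x, y, w, h = cv2.boundingRect(contour)
        if w > 3 * h and w > 10:  # wide, thin regions treated as rib markings
            ribs += 1
    return ribs
```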
[00058] After the thresholds have been adjusted and the expected number of aspects has been detected in the image, the method can be allowed to proceed from the threshold adjustment loop to a step where the image data is scaled and checked for errors (step 432). In particular, this step may be where the analysis module 116 converts the image pixel data to fixed dimension information (e.g., the actual belt dimension is determined based on the belt dimension in the image, the actual dimension of an aspect (e.g., ribs) is determined based on the dimension of the aspects in the image, etc.).
[00059] In some embodiments, the dimension of the marked areas on the belt (e.g., rib tips) can be determined in physical units (e.g., area, boundary dimensions, etc.) (step 436). In this step the uppermost and lowermost edges detected on the belt can be used to determine the scaling ratio. This scaling relationship can be determined in various ways. Since the type of belt being tested can be known and communicated to the analysis module 116, the width of that type of belt can be known to the analysis module 116 through the object database 120. The known width of the belt from the object database 120 can be equated to the detected belt width in the image (which is in pixels), and this ratio of physical units to pixel units can be used as a physical-unit-to-pixel conversion ratio for the other objects in the image. Alternatively, rather than using known belt width data from the object database 120, the user 132 who obtains the belt image can also obtain a measurement of the width of the belt being tested and can provide that data to the analysis module 116 for purposes of converting pixel values to physical values. In yet another alternative or additional implementation, an image can be captured of an object of a known or standard dimension (e.g., ruler, penny, quarter, dollar bill, etc.), and the image of the object of known or standard dimension can be used to convert pixel units to physical units. The object of known or standard dimension can be included in the belt image, or it can be captured in an image separate from the belt image.
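A minimal sketch of this unit conversion follows, assuming the known belt width is available (e.g., from the object database 120); the numeric values are illustrative only.

```python
# Sketch of the unit conversion described above: equate a known physical
# belt width to the detected pixel width. Values are illustrative.
def physical_units_per_pixel(known_width_mm: float,
                             top_edge_y_px: float,
                             bottom_edge_y_px: float) -> float:
    detected_width_px = abs(bottom_edge_y_px - top_edge_y_px)
    return known_width_mm / detected_width_px  # mm per pixel

# e.g., a 21.36 mm wide belt spanning 480 pixels in the image:
mm_per_px = physical_units_per_pixel(21.36, 60.0, 540.0)  # ~0.0445 mm/px
```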
[00060] Another analysis step that can be performed by the analysis module 116 is an image overlay step in which the image of the belt being tested is overlaid against one or more images of belts that have known physical dimensions (step 440). An alternative image overlay step may involve the analysis module 116 overlaying the calculated dimensions of the marked aspects (e.g., rib width dimensions) against the original image, such that the user 132 can check to ensure that the analysis module 116 accurately identified the aspects of the belt from the image. Aspects analyzed by the analysis module 116 may include the dimension, shape, and/or distance between marked portions of the belt and/or the dimension, shape, and/or distance between unmarked portions of the belt. An example of an overlaid image that can be used in accordance with embodiments of the present disclosure is outlined in Fig. 6, where calculated areas of certain aspects are overlaid against the actual image.
[00061] Depending on the results of the image analysis, the analysis module 116 can determine that the object (for example, a belt) is good, bad, etc., and based on this determination the analysis module 116 can generate one or more alarm conditions relating to the object's condition (step 444). Alternatively or in addition, the analysis module 116 can generate other statistics regarding the belt, such as predicted usage history based on detected wear and proposed next steps (e.g., recheck the belt at time X; replace the belt and check the new belt at time Y; abnormal wear detected, therefore check other components that interface with the belt; etc.).
[00062] Alarm conditions or other test results can then be reported back to the user 132 (step 448). Analysis reports may be transmitted in one or more communication packets over communication network 204, unless analysis module 116 resides in user device 208 or some other device that is co-located with user interface 128.
[00063] Referring now to Fig. 5, an example of a contrast threshold solution routine will be described in accordance with embodiments of the present disclosure. Specifically, the analysis module 116 can automatically perform this particular routine in the threshold adjustment steps 424 and 428. In the example outlined in Fig. 5, an image being analyzed may comprise a 256-level grayscale image, where each pixel in the image may include a value whose magnitude lies between 0 and 255, for example, where 0 = black and 255 = white. There is an optimal threshold value (a grayscale value somewhere between 0 and 255) that results in the best edge detection by the analysis module 116. In some embodiments it may be possible to solve from both directions, from white to black and from black to white. There is a threshold value (grayscale) at which the rib count is equal to the target. Tmin equals the point closest to black where the detected rib count equals the target rib count. Tmax equals the point closest to white where the detected rib count equals the target count. In some embodiments the values of Tmin and Tmax can be solved using a binary search method. Once solved, the Tmin and Tmax values can be used to determine an optimal threshold, which equals the average grayscale value of Tmin and Tmax. An alternative or complementary algorithm for determining the threshold value will be described in further detail with reference to Figs. 9 to 11.
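A sketch of the Tmin/Tmax routine follows, reusing the hypothetical count_ribs helper from the earlier sketch; a plain scan over all 256 levels is shown for clarity, whereas the disclosure contemplates a binary-search-style solver.

```python
# Sketch of the Tmin/Tmax solve described above. Assumes a helper such as
# count_ribs(gray, t) from the earlier sketch. Linear scan shown for clarity.
def optimal_threshold(gray, target_ribs: int) -> int:
    matches = [t for t in range(256) if count_ribs(gray, t) == target_ribs]
    if not matches:
        raise ValueError("no threshold reproduces the target rib count")
    t_min = min(matches)         # point closest to black (0)
    t_max = max(matches)         # point closest to white (255)
    return (t_min + t_max) // 2  # average of Tmin and Tmax
```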
[00064] Referring now to Fig. 6, further details of an example overlay image of a belt 600 will be described in accordance with embodiments of the present disclosure. The belt 600 image can be converted to a binary image and analyzed by the analysis module 116. Based on this analysis, the detected edges or contours can be overlaid over the real image to produce the overlay image 600. As can be seen, the types of aspects that can be detected by the analysis module 116 include one or more markings 608a-N on the ribs 612a-N of the belt, a first edge 616 (e.g., the uppermost edge of the belt), and a second edge 620 (e.g., the lowermost edge of the belt). Markings 608a-N can be detected within a zoom window 604 that identifies the region of interest within the image. In some embodiments the zoom window can be automatically generated and positioned around the plurality of markings 608a-N. In other embodiments, a user 132 may interact with test application 224 to manually identify the zoom window 604 to help the analysis module 116 focus its analysis on the contours within the zoom window 604.
[00065] Specifically, part of the operation of the test application 224 may instruct the user 132 to select the zoom window area 604 such that the plurality of markings 608a-N are within the zoom window 604 and the first and second edges 616, 620 are also within the zoom window 604. This can also help the analysis module 116 create a more accurate conversion of pixel values to physical values. In particular, the edges 616, 620 may be relatively easy to distinguish from the markings 608a-N, since the edges 616, 620 span the entire width of the zoom window 604 while the markings 608a-N do not. Consequently, the analysis module 116 can use this fact and a simple rule: if a contour is detected as crossing the entire width of the zoom window 604, then that contour can be treated as one of the edges 616, 620, whereas if a contour is detected as not crossing the entire width of the zoom window 604, then that contour can be treated as a marking 608.
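The full-width rule can be expressed compactly, as in the sketch below; the margin parameter is an assumption added to tolerate contour pixelation and is not specified by the disclosure.

```python
# Sketch of the full-width rule described above: a contour spanning the
# zoom window's width is a belt edge; anything narrower is a rib marking.
import cv2

def classify_contours(contours, window_width_px: int, margin_px: int = 2):
    edges, markings = [], []
    for contour in contours:
        x, y, w, h = cv2.boundingRect(contour)
        if w >= window_width_px - margin_px:  # crosses the entire zoom window
            edges.append(contour)
        else:
            markings.append(contour)
    return edges, markings
```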
[00066] In some embodiments, the analysis module 116 can calculate the top and bottom Y values for the belt (e.g., the belt width) in pixel units, and then an ideal width can be computed from the rib count and rib spacing based on at least some information obtained from the object database 120. Thereafter, an equation of the following form can be used to calculate a scale factor to convert pixel values to physical values:

scale factor = (rib count × rib spacing) / (Ybottom − Ytop)

where Ytop and Ybottom are the pixel positions of the detected uppermost and lowermost belt edges.
[00067] In some embodiments, an equation of the following form can also be used to determine the belt width based on a calculated rib spacing and the distance between rib centers:

belt width = rib count × rib spacing
[00068] Thereafter, an error checking step (e.g., step 432) can be performed, where the pixel position of the nth rib from the top is determined according to an equation of the following form:

Yn = Ytop + (n − 1/2) × (rib spacing / scale factor)
[00069] The center of each detected rib can then be calculated according to the above equation, and the image can be rejected if the difference between the detected rib position and the expected rib position is too large, for example, if it exceeds a predetermined threshold.
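A sketch of this scaling and error check follows, based on the equations as reconstructed above (the tolerance value is an illustrative assumption).

```python
# Sketch of the scaling and error check of steps 432-436, based on the
# equations as reconstructed above. The 0.5 mm tolerance is illustrative.
def check_rib_positions(rib_centers_px, y_top_px, y_bottom_px,
                        rib_count: int, rib_spacing_mm: float,
                        max_error_mm: float = 0.5) -> bool:
    ideal_width_mm = rib_count * rib_spacing_mm
    scale = ideal_width_mm / (y_bottom_px - y_top_px)  # mm per pixel
    for n, detected_px in enumerate(sorted(rib_centers_px), start=1):
        expected_px = y_top_px + (n - 0.5) * rib_spacing_mm / scale
        if abs(detected_px - expected_px) * scale > max_error_mm:
            return False  # reject the image: rib too far from expected position
    return True
```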
[00070] Figs. 7A-D outline examples of an image of an object under test 104 during various stages of image analysis. Specifically, Fig. 7A outlines an original image of a belt that may have originally been in grayscale or in color. Fig. 7B outlines a grayscale conversion of the image in Fig. 7A, if the image in Fig. 7A was originally in color. Fig. 7C outlines the image after a filter (e.g., a Gaussian blur) has been applied to the image in Fig. 7B. Fig. 7D outlines the image in Fig. 7C after the image has been converted to a black-and-white image. It is this image that can be analyzed by the analysis module 116 to detect edges in the image (for example, marked aspects of the belt).
[00071] Fig. 8 outlines an example of a user interface presentation that can be provided to a user 132 after the image has been analyzed by the analysis module 116. Specifically, the presentation can be provided via the user interface 128 of the user device 208, and the display may comprise a first presentation area 808 where the original image within a zoom window 604 is outlined, a second presentation area 812 where the analyzed black-and-white image is outlined (or possibly where an overlay image is outlined), a submit button 816, a results presentation portion 820, an image capture button 824, an analyze button 828, and a settings button 832.
[00072] As can be appreciated, the submit button 816 can cause the user device 208 to transmit the image from the first presentation area 808 to the analysis module 116, and the overlay image 600 can then be presented in the second presentation area 812.
[00073] The results presentation portion 820 can be used to present statistical results, recommended remediation actions, wear history data, and other data obtained from the image analysis and/or from the object database 120. The results presentation portion 820 may also present any assumptions made by the analysis module 116 in analyzing the object under test 104.
[00074] Buttons 824, 828, 832 can be used to cause the test application 224 to perform one or more actions associated with the button. Specifically, the image capture button 824 can cause the user device 208 to activate the image capture device 108 and capture an image of the object under test 104. The analyze button 828 can cause the user device 208 to format the captured image and send it to the analysis module 116 for testing. The settings button 832 may enable a user 132 to configure settings of the test application 224 and perform other administrative tasks.
[00075] Although not depicted, the interface may also comprise an area in which a user is allowed to provide feedback information to the analysis module 116 (e.g., other comments related to wear of the object under test 104, comments on the accuracy of the analysis, comments on the accuracy of the assumptions, etc.). There may also be a dialog box that enables a user 132 to directly access the object database 120. There may also be a window presented to the user 132 after the image of the object under test 104 is captured, and within this window the user 132 can be allowed to select the size and orientation of the zoom window 604 with respect to the image. Other types of dialog boxes, buttons, etc., may also be made available to a user 132 without departing from the scope of the present disclosure.
[00076] Referring now to Figs. 9-11, an alternative algorithm for determining a threshold value will be described in accordance with embodiments of the present invention. This particular algorithm can be used as an alternative to the algorithm discussed in connection with Fig. 5, or it can be used to complement the algorithm discussed in connection with Fig. 5. Specifically, the threshold selection algorithm starts by sampling an image of a belt (or other type of object) using gray thresholds between two extremes (e.g., 0 to 255, or any other discrete levels between two binary colors). At each sampled threshold the number of ribs on the belt is detected (Fig. 9). The analysis module 116 can be configured to sweep through each candidate threshold and determine the number of ribs detected at each candidate threshold until the entire plot of Fig. 9 is created.
[00077] After the number of ribs has been registered for each threshold level, the analysis module 116 can find the rib count mode (for example, the rib count value that occurs most often, obtained from the threshold sweep summarized in Fig. 9). The rib count data can then be incorporated into another data set that correlates each rib count to the number of occurrences of that rib count (Fig. 10).
[00078] In the example outlined here, a rib count of six was determined to have more occurrences than any other rib count. Consequently, the data set from Fig. 9 can be filtered by the most frequent rib count from Fig. 10, resulting in the filtered data set that is outlined in Fig. 11. The threshold values in Fig. 11 represent only the threshold values obtained for the rib count of six (i.e., the most frequently occurring rib count). The analysis module 116 can use the filtered data of Fig. 11 to determine a more accurate threshold value for converting a grayscale image to a black-and-white image. Specifically, the analysis module 116 can determine a median or average threshold value from the filtered data. By analyzing the filtered data, the analysis module 116 will be configured to ignore certain outlier points (e.g., thresholds that result in the detection of 15 ribs) and thus obtain a more useful threshold value.
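A compact sketch of the Figs. 9-11 routine follows, again assuming a count_ribs(gray, t) helper as sketched earlier.

```python
# Sketch of the Figs. 9-11 selection routine: sweep thresholds, find the
# modal rib count, filter to that count, and take the median threshold.
from collections import Counter
from statistics import median

def threshold_by_mode(gray) -> int:
    samples = [(t, count_ribs(gray, t)) for t in range(256)]     # Fig. 9
    counts = Counter(ribs for _, ribs in samples if ribs > 0)
    if not counts:
        raise ValueError("no ribs detected at any threshold")
    modal_ribs = counts.most_common(1)[0][0]                     # Fig. 10
    filtered = [t for t, ribs in samples if ribs == modal_ribs]  # Fig. 11
    return int(median(filtered))  # outlier rib counts are ignored
```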
[00079] It should be appreciated that the above-described algorithm for determining a threshold value can be used to determine a grayscale threshold value, a color threshold value, or a threshold value for any other type of pixel data. In other words, the algorithm described above is not intended to be limited to converting grayscale images to black-and-white images.
[00080] In the foregoing description, for the purposes of illustration, methods have been described in a particular order. It should be appreciated that in alternative embodiments the methods may be performed in a different order than that described. It should also be appreciated that the methods described above can be performed by hardware components or can be embodied in sequences of machine-executable instructions, which can be used to cause a machine, such as a general-purpose or special-purpose processor (GPU or CPU) or logic circuits programmed with the instructions (FPGA), to perform the methods. These machine-executable instructions may be stored on one or more machine-readable media, such as CD-ROMs or other types of optical disks, floppy disks, ROMs, RAMs, EPROMs, EEPROMs, magnetic or optical cards, flash memory, or other types of machine-readable media suitable for storing electronic instructions. Alternatively, the methods can be performed by a combination of hardware and software.
[00081] Specific details have been given in the description to provide a thorough understanding of the embodiments. However, it will be understood by one of ordinary skill in the art that the embodiments may be practiced without these specific details. For example, circuits may be shown in block diagrams in order not to obscure the embodiments in unnecessary detail. In other instances, well-known circuits, processes, algorithms, structures, and techniques may be shown without unnecessary detail in order to avoid obscuring the embodiments.
[00082] It is also noted that the embodiments have been described as a process that is depicted as a flowchart, a flow diagram, a data flow diagram, a structure diagram, or a block diagram. Although a flowchart may describe the operations as a sequential process, many of the operations can be performed in parallel or concurrently. In addition, the order of the operations can be rearranged. A process is terminated when its operations are completed, but it could have additional steps not included in the figure. A process may correspond to a method, a function, a procedure, a subroutine, a subprogram, etc. When a process corresponds to a function, its termination corresponds to a return of the function to the calling function or the main function.
[00083] Furthermore, embodiments may be implemented by hardware, software, firmware, middleware, microcode, hardware description languages, or any combination thereof. When implemented in software, firmware, middleware, or microcode, the program code or code segments to perform the necessary tasks may be stored in a machine-readable medium such as a storage medium. A processor may perform the necessary tasks. A code segment may represent a procedure, a function, a subprogram, a program, a routine, a subroutine, a module, a software package, a class, or any combination of instructions, data structures, or program statements. A code segment may be coupled to another code segment or to a hardware circuit by passing and/or receiving information, data, arguments, parameters, or memory contents. Information, arguments, parameters, data, etc. may be passed, forwarded, or transmitted via any suitable means including memory sharing, message passing, token passing, network transmission, etc.
[00084] Although illustrative embodiments of the disclosure have been described in detail, it is to be understood that the inventive concepts may be otherwise variously embodied and employed, and that the appended claims are intended to be construed to include such variations, except insofar as limited by the prior art.
Claims (20)
1. A method for measuring a wear condition of a belt, characterized in that it comprises: obtaining a raster image of the belt, the raster image including image data for at least one aspect of the belt, including at least a portion of a non-rib co-located with a belt edge; analyzing the raster image of the belt; based on the analysis of the raster image of the belt, determining physical dimensions of the at least one aspect of the belt; based on the determined physical dimensions of the at least one aspect of the belt, determining a wear condition for at least a portion of the belt; preparing a report that includes the results of the belt raster image analysis and the wear condition determined for the belt; and transmitting the report to a user device operated by at least one interested party.
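By way of illustration only, the claimed sequence of steps might be realized with a short OpenCV script such as the sketch below; the helper name, the fixed binarization threshold, and the Canny parameters are assumptions for the example rather than details recited in the claim:

```python
import cv2

def measure_belt_wear(image_path):
    """Illustrative pipeline: obtain a raster image, analyze it with a
    standard blur -> grayscale -> binary -> edge-detection chain, and
    return the edge map from which physical dimensions would be measured."""
    image = cv2.imread(image_path)                    # obtain the raster image
    blurred = cv2.GaussianBlur(image, (5, 5), 0)      # suppress sensor noise
    gray = cv2.cvtColor(blurred, cv2.COLOR_BGR2GRAY)  # grayscale conversion
    _, binary = cv2.threshold(gray, 128, 255, cv2.THRESH_BINARY)  # binarize
    edges = cv2.Canny(binary, 50, 150)                # detect rib/belt edges
    return edges

# The downstream steps (dimension measurement, wear grading, and database
# lookup) are sketched after claims 6, 10, and 11 below.
```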
2. Method according to claim 1, characterized in that the user device to which the report is transmitted is the same device that captured the raster image of the belt.
3. A method for measuring a wear condition of a belt, characterized in that it comprises: obtaining a raster image of the belt, the raster image including image data for at least one aspect of the belt; analyzing the raster image of the belt; based on the analysis of the raster image of the belt, determining physical dimensions of the at least one aspect of the belt; based on the determined physical dimensions of the at least one aspect of the belt, determining a wear condition for the belt; preparing a report that includes the results of the belt raster image analysis and the wear condition determined for the belt; and transmitting the report to a user device operated by at least one interested party; and wherein the at least one aspect of the belt comprises one or more belt ribs, wherein analyzing the raster image of the belt comprises: converting a first image of the belt to a binary pixel data image of the belt, wherein the conversion of the first image to the black-and-white image is accomplished by: determining a minimum threshold that corresponds to a minimum pixel value at which a determined number of belt ribs equals an expected number of belt ribs; determining a maximum threshold that corresponds to a maximum pixel value at which the determined number of belt ribs equals the expected number of belt ribs; and determining an average of the minimum threshold and the maximum threshold and using the determined average as the threshold for converting the first image to the binary pixel data image.
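A minimal sketch of the min/max-threshold averaging recited in this claim; it assumes a caller-supplied count_ribs routine that counts rib regions in a binary image, and the scan step of 5 gray levels is likewise an assumption:

```python
import cv2

def binarize_by_rib_count(gray, expected_ribs, count_ribs, step=5):
    """Scan gray levels, record those at which the detected rib count
    equals the expected count, and binarize at the average of the lowest
    and highest such thresholds."""
    matching = []
    for t in range(0, 256, step):
        _, binary = cv2.threshold(gray, t, 255, cv2.THRESH_BINARY)
        if count_ribs(binary) == expected_ribs:
            matching.append(t)
    if not matching:
        raise ValueError("expected rib count was never detected")
    t_avg = (min(matching) + max(matching)) / 2.0  # average of min/max thresholds
    _, result = cv2.threshold(gray, t_avg, 255, cv2.THRESH_BINARY)
    return result
```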
4. Method according to claim 3, characterized in that the first image comprises a color image and in that the threshold determined for converting the first image to the binary pixel data image corresponds to threshold values for red, green, and blue pixel color values.
5. Method according to claim 3, characterized in that the first image comprises a grayscale image and in that the determined threshold for converting the first image to the binary pixel data image corresponds to a grayscale pixel value.
6. Method according to claim 1, characterized in that the physical dimensions of the at least one aspect of the belt are determined by determining a conversion value that, when multiplied by a number of pixels, corresponds to a physical dimension, wherein the conversion value is determined by calculating the number of pixels between a first detected belt edge and a second detected belt edge.
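For example, with invented numbers, the conversion value of this claim can be computed as a pixels-per-millimetre ratio from the two detected belt edges and a known belt width, assumed to be retrieved from a belt database:

```python
def pixels_per_mm(first_edge_x, second_edge_x, known_width_mm):
    """Conversion value: pixel distance between the two detected belt
    edges divided by the belt's known physical width."""
    return abs(second_edge_x - first_edge_x) / known_width_mm

# Hypothetical values: edges detected at x = 112 and x = 848 on a six-rib
# belt with a nominal width of 21.36 mm.
ratio = pixels_per_mm(112, 848, 21.36)  # ~34.5 px/mm
rib_spacing_px = 122                    # measured between adjacent rib centers
print(rib_spacing_px / ratio)           # ~3.54 mm physical rib spacing
```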
7. Method according to claim 6, characterized in that the first detected edge and the second detected edge are detected as contours that cross an entire length of an approach window placed around the at least one aspect of the belt and the first and second belt edges.
8. Method according to claim 4, characterized in that the physical dimensions of the at least one aspect of the belt are determined by determining a conversion value that, when multiplied by a number of pixels, corresponds to a physical dimension, wherein the conversion value is determined by calculating the average number of pixels between adjacent ribs.
9. Method according to claim 4, characterized in that the first detected edge and the second detected edge are detected as the inner edges of the outermost regions.
10. Method according to claim 1, characterized in that the wear condition for the belt comprises at least one of good, bad, and questionable, and in that the wear condition for the belt is determined by detecting a distance between a first aspect of the belt and a second aspect of the belt and comparing the detected distance with an expected distance.
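A sketch of the comparison this claim recites; the tolerance bands are illustrative assumptions, as the claim does not fix numeric limits:

```python
def grade_wear(detected_mm, expected_mm, good_tol=0.05, bad_tol=0.15):
    """Grade the belt by the relative deviation of a detected distance
    (e.g., rib spacing) from the expected distance."""
    deviation = abs(detected_mm - expected_mm) / expected_mm
    if deviation <= good_tol:
        return "good"
    if deviation <= bad_tol:
        return "questionable"
    return "bad"

print(grade_wear(3.41, 3.56))  # ~4.2% deviation -> "good"
```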
11. Method according to claim 10, characterized in that the expected distance is retrieved from an object database comprising data regarding a plurality of different types of belts, wherein the data for each of the plurality of different types of belts include one or more of a belt identifier, rib count, rib profile, thresholds, wear history, and replacement history.
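One plausible shape for an entry in the object database described here, using the data items listed in the claim; the field types and sample values are assumptions for illustration only:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class BeltRecord:
    belt_id: str                    # belt identifier
    rib_count: int                  # expected number of ribs
    rib_profile: str                # e.g., a multi-rib "K" profile
    thresholds: List[float]         # previously successful binarization thresholds
    wear_history: List[float]       # past measured rib spacings, in mm
    replacement_history: List[str]  # dates on which the belt was replaced

# Hypothetical lookup of the expected distance used in the comparison above.
db = {"6PK1693": BeltRecord("6PK1693", 6, "K", [90.0], [3.56, 3.50], ["2011-03-02"])}
expected_mm = db["6PK1693"].wear_history[0]
```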
12. Method according to claim 1, characterized in that it further comprises modifying an operational parameter of equipment that is using the belt based on the belt wear data.
13. Method according to claim 3, characterized in that the user device to which the report is transmitted is the same device that captured the raster image of the belt.
14. Method according to claim 3, characterized in that the physical dimensions of the at least one aspect of the belt are determined by determining a conversion value that, when multiplied by a number of pixels, corresponds to a physical dimension, wherein the conversion value is determined by calculating the number of pixels between a first detected belt edge and a second detected belt edge.
15. Method according to claim 14, characterized in that the first detected edge and the second detected edge are detected as contours that cross an entire length of an approach window placed around the at least one aspect of the belt and the first and second detected edges of the belt.
16. Method according to claim 3, characterized in that the wear condition for the belt comprises at least one of good, bad, and questionable, and in that the wear condition for the belt is determined by detecting a distance between a first aspect of the belt and a second aspect of the belt and comparing the detected distance with an expected distance.
17. Method according to claim 16, characterized in that the expected distance is retrieved from an object database comprising data regarding a plurality of different types of belts, wherein the data for each of the plurality of different types of belts include one or more of a belt identifier, rib count, rib profile, thresholds, wear history, and replacement history.
18. Method according to claim 1, characterized in that the at least one aspect of the belt comprises one or more belt ribs, and in that analyzing the raster image of the belt comprises: converting a first image of the belt to a binary pixel data image of the belt, wherein the conversion of the first image to the black-and-white image is performed by: determining a minimum threshold that corresponds to a minimum pixel value at which a determined number of belt ribs corresponds to an expected number of belt ribs; determining a maximum threshold that corresponds to a maximum pixel value at which the determined number of belt ribs corresponds to the expected number of belt ribs; and determining an average of the minimum threshold and the maximum threshold and using the determined average as the threshold for converting the first image to the binary pixel data image.
19. Method according to claim 18, characterized in that the first image comprises a color image and in that the threshold determined for converting the first image to the binary pixel data image corresponds to the threshold values for the red, green, and blue pixel color values.
20. Method according to claim 18, characterized in that the first image comprises a grayscale image and in that the determined threshold for converting the first image to the binary pixel data image corresponds to a grayscale pixel value.